    Comparison of input devices in an ISEE direct timbre manipulation task

    The representation and manipulation of sound within multimedia systems is an important and currently under-researched area. The paper gives an overview of the authors' work on the direct manipulation of audio information, and describes a solution based upon the navigation of four-dimensional scaled timbre spaces. Three hardware input devices were experimentally evaluated for use in a timbre space navigation task: the Apple Standard Mouse, the Gravis Advanced Mousestick II joystick (in absolute and relative modes), and the Nintendo Power Glove. Results show that the usability of these devices significantly affected the efficacy of the system, and that conventional low-cost, low-dimensional devices provided better performance than the low-cost, multidimensional dataglove.

    Look Who's Talking: The GAZE Groupware System

    The GAZE Groupware System is a multiparty mediated system which provides support for gaze awareness in communication and collaboration. The system uses an advanced, desk-mounted eyetracker to metaphorically convey gaze awareness in a 3D virtual meeting room and within shared documents.

    Augmenting and Sharing Memory with eyeBlog

    eyeBlog is an automatic personal video recording and publishing system. It consists of ECSGlasses [1], which are a pair of glasses augmented with a wireless eye contact and glyph sensing camera, and a web application that visualizes the video from the ECSGlasses camera as chronologically delineated blog entries. The blog format allows for easy annotation, grading, cataloging and searching of video segments by the wearer or anyone else with internet access. eyeBlog reduces the editing effort of video bloggers by recording video only when something of interest is registered by the camera. Interest is determined by a combination of independent methods. For example, recording can automatically be triggered upon detection of eye contact towards the wearer of the glasses, allowing all face-to-face interactions to be recorded. Recording can also be triggered by the detection of image patterns such as glyphs in the frame of the camera. This allows the wearer to record their interactions with any object that has an associated unique marker. Finally, by pressing a button the user can manually initiate recording.
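    The trigger logic described in the abstract — any one of several independent methods is sufficient to start recording — can be sketched as follows. This is a minimal illustration, not code from the actual system; all names (`FrameSensors`, `should_record`) are hypothetical.

    ```python
    from dataclasses import dataclass

    @dataclass
    class FrameSensors:
        """Per-frame signals from the camera (assumed interface)."""
        eye_contact: bool      # someone is making eye contact with the wearer
        glyph_detected: bool   # a unique visual marker is in the frame
        button_pressed: bool   # the wearer manually initiated recording

    def should_record(sensors: FrameSensors) -> bool:
        # The methods are independent: any single one triggers recording.
        return sensors.eye_contact or sensors.glyph_detected or sensors.button_pressed
    ```

    Keeping each trigger independent means new interest detectors can be added without changing the others.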

    Designing Awareness with Attention-based Groupware

    A design rationale for the implementation of awareness features in the attention-based GAZE Groupware System is discussed. Attention-based groupware uses a framework for the design of awareness features based on the capturing, conveyance and rendering of information about human attention. The aim is to integrally provide information about the focus of conversational as well as workspace activities of participants. Our design themes were: implicit capturing of awareness information; scalability of networked awareness information; and representation of awareness information using natural affordances. Eye tracking provides a direct and noncommand way of capturing human attention. It allows attentive information to be conveyed separately from the communication signal itself, in a machine-readable format. This eases the integration of Conversational and Workspace Awareness information, and allows network bandwidth consumption of this information to scale linearly with the number of users. Attentional focus also provides an organizational metaphor for the rendering of awareness information. By combining a stricter WYSIWIS general communication and collaboration tool (a 3D virtual meeting room) with more relaxed-WYSIWIS focused collaboration tools (2D editors), the attention of human participants can be guided and represented from broad to focused activity.
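    The scalability claim above — conveying attention as a small machine-readable record rather than inferring it from video — can be illustrated with a back-of-the-envelope comparison. The constants here are assumptions for illustration only, not figures from the paper.

    ```python
    # Illustrative comparison (assumed numbers): each user broadcasts one small
    # attention record, so total load grows linearly with participants, whereas
    # pairwise video streams grow quadratically.

    ATTENTION_RECORD_BYTES = 16   # assumed size: user id plus a gaze target
    VIDEO_STREAM_KBPS = 256       # assumed per-stream video bitrate

    def attention_load(n_users: int) -> int:
        # One attention record per user: O(n) total bytes per update.
        return n_users * ATTENTION_RECORD_BYTES

    def pairwise_video_load(n_users: int) -> int:
        # Every user streams to every other user: O(n^2) total kbps.
        return n_users * (n_users - 1) * VIDEO_STREAM_KBPS
    ```

    Doubling the number of participants doubles the attention traffic but roughly quadruples the pairwise video traffic, which is why the attentive metadata scales to larger groups.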

    The GAZE Groupware System: Mediating Joint Attention in Multiparty Communication and Collaboration

    In this paper, we discuss why, in designing multiparty mediated systems, we should focus first on providing non-verbal cues which are less redundantly coded in speech than those normally conveyed by video. We show how conveying one such cue, gaze direction, may solve two problems in multiparty mediated communication and collaboration: knowing who is talking to whom, and who is talking about what. As a candidate solution, we present the GAZE Groupware System, which combines support for gaze awareness in multiparty mediated communication and collaboration with small and linear bandwidth requirements. The system uses an advanced, desk-mounted eyetracker to metaphorically convey gaze awareness in a 3D virtual meeting room and within shared documents.
    KEYWORDS: CSCW, multiparty videoconferencing, awareness, attention, gaze direction, eyetracking, VRML 2.
    INTRODUCTION: With recent advances in network infrastructure and computing power, desktop video conferencing and groupware systems are ra..